The spatial extent of ion cyclotron waves at Io has been interpreted as requiring a multistep acceleration and transport process: exospheric ions are accelerated outward (relative to Jupiter) by the corotation electric field, neutralized by charge exchange in the surrounding exosphere, and then reionized after traveling far across magnetic field lines, at which point they generate the waves. The trajectories of the particles away from Io are sensitive to the location of their initial ionization. This paper examines the spatial distributions of fast neutrals produced under varying conditions in order to constrain the possible structure and nature of the Io exosphere. While a rapid onset of cyclotron waves at a specific location around Io can be modeled with a single, point-source region of ions, such as might occur over a volcano, the regional extent of the waves suggests multiple or distributed sources.
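As standard background not given in the abstract, the pickup-ion quantities that control these trajectories are the ion gyrofrequency and gyroradius; for a freshly ionized exospheric particle the perpendicular speed is approximately the local flow speed of the corotating plasma relative to Io:
\[
\Omega_c = \frac{qB}{m}, \qquad
r_g = \frac{m\,v_\perp}{qB} \approx \frac{m\,\lvert \mathbf{v}_{\mathrm{corot}} - \mathbf{v}_{\mathrm{Io}} \rvert}{qB},
\]
where \(q\) and \(m\) are the ion charge and mass and \(B\) is the local magnetic field strength; the ion cyclotron waves are generated near \(\Omega_c\) of the pickup species.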
Summary. A first-order form of Euler's equations for rays in an ellipsoidal model of the Earth is obtained. The conditions on the velocity law for a monotonic increase, with respect to arc length, of the angular distance to the epicentre and of the angle of incidence are the same in the ellipsoidal and spherical models. It is therefore possible to trace rays and to compute travel times directly in an ellipsoidal earth as in the spherical model. Comparison with rays of the same coordinates in a spherical earth thus provides an estimate of the various deviations of these rays due to the Earth's flattening, and of the corresponding travel-time differences, for mantle P-waves and for shallow earthquakes. All these deviations are functions of both the latitude and the epicentral distance. The difference in the distance to the Earth's centre at points with the same geocentric latitude on rays in the ellipsoidal and in the spherical model may reach several kilometres. Directly related to the deformation of the isovelocity surfaces, this difference is the only cause of significant perturbation in travel times. Other differences, such as that corresponding to the ray torsion, are of first order in ellipticity and may exceed 1 km, but they induce only small differences in travel time, less than 0.01 s. Thus, we show that the ellipticity correction obtained by Jeffreys (1935) and Bullen (1937) by a perturbational method can be recovered by a direct evaluation of the travel times in an ellipsoidal model of the Earth. Moreover, as stated by Dziewonski & Gilbert (1976), we verify that this correction does not depend on the choice of the velocity law.
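For reference, the spherical-model relations against which the ellipsoidal results are compared are the standard ones (not reproduced in the abstract): with ray parameter \(p = r \sin i / v(r)\) and \(\eta = r/v(r)\), the epicentral distance and travel time of a ray bottoming at radius \(r_p\) and starting at radius \(r_0\) are
\[
\Delta(p) = 2\int_{r_p}^{r_0} \frac{p\,dr}{r\sqrt{\eta^2 - p^2}}, \qquad
T(p) = 2\int_{r_p}^{r_0} \frac{\eta^2\,dr}{r\sqrt{\eta^2 - p^2}} ,
\]
and the ellipsoidal formulation evaluates the same quantities with the flattened isovelocity surfaces retained.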
The Rayleigh wave phase and group velocities in the period range of 24–39 sec, obtained from two earthquakes that occurred in northeastern Brazil and were recorded by the Brazilian seismological station RDJ (Rio de Janeiro), have been used to study the crustal and upper mantle structure of the Brazilian coastal region. Three crustal and upper mantle models have been tried out to explain the structure of the region. The upper crust has not been resolved, due mainly to the narrow period range of the phase and group velocity data. The phase velocity inversions have exhibited good resolution for both the lower crust and the upper mantle, with shear wave velocities characteristic of these regions. The group velocity inversions for these models have shown good results only for the lower crust. The shear wave velocities of the lower crust (3.86 and 3.89 km/sec), obtained with the phase velocity inversions, are similar to that (3.89 km/sec) found by Hwang (1985) for the eastern South American region, while the group velocity inversions yield a shear velocity (3.75 km/sec) similar to that (3.78 km/sec) found by Lazcano (1972) for the Brazilian shield. It was not possible to define the crust–mantle transition sharply, but an analysis of the phase and group velocity inversion results indicates that the total thickness of the crust should be between 30 and 39 km. The crustal and upper mantle model obtained with phase velocity inversion can be used as a preliminary model for the Brazilian coast.
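As a reminder of the standard dispersion relations underlying the joint use of phase and group velocities (general surface-wave theory, not specific to this paper): with angular frequency \(\omega\), wavenumber \(k\) and period \(T\),
\[
c(T) = \frac{\omega}{k}, \qquad
U(T) = \frac{d\omega}{dk} = c + k\frac{dc}{dk} = \frac{c}{1 + (T/c)\,dc/dT},
\]
so group velocities constrain structure through the slope of the phase-velocity curve and provide information that is partly independent of the phase velocities themselves.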
Monitoring groundwater quality by cost-effective techniques is important because aquifers are vulnerable to contamination from uncontrolled discharge of sewage and from agricultural and industrial activities. Faulty planning and mismanagement of irrigation schemes are the principal reasons for groundwater quality deterioration. This study presents an artificial neural network (ANN) model for predicting the concentration of nitrate, the most common pollutant in shallow aquifers, in the groundwater of the Harran Plain. Samples from 24 observation wells were analysed monthly for 1 year. Nitrate concentrations in almost all groundwater samples were significantly above the maximum allowable concentration of 50 mg/L, probably due to the excessive use of artificial fertilizers in intensive agricultural activities. Easily measurable parameters such as temperature, electrical conductivity, groundwater level and pH were used as input parameters in the ANN-based nitrate prediction. The best back-propagation (BP) algorithm and number of neurons were determined to optimize the model architecture. The Levenberg–Marquardt algorithm was selected as the best of 12 BP algorithms, and the optimal number of neurons was determined to be 25. The model tracked the experimental data very closely (R = 0.93). Hence, groundwater resources can be managed more cost-effectively and easily with the proposed model.
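As an illustrative companion to the abstract above, the following minimal Python sketch trains a single-hidden-layer network with 25 neurons on the four easily measured inputs named there. It is a sketch under stated assumptions, not the authors' model: scikit-learn provides no Levenberg–Marquardt solver, so L-BFGS stands in, and the data arrays are synthetic placeholders rather than the Harran Plain measurements.

# Hedged sketch: ANN with one hidden layer of 25 neurons mapping temperature,
# electrical conductivity, groundwater level and pH to nitrate concentration.
# Synthetic data; L-BFGS used because scikit-learn lacks Levenberg-Marquardt.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 288  # e.g. 24 wells x 12 monthly samples (illustrative only)
X = np.column_stack([
    rng.uniform(10, 30, n),      # temperature (deg C)
    rng.uniform(500, 3000, n),   # electrical conductivity (uS/cm)
    rng.uniform(1, 20, n),       # groundwater level (m)
    rng.uniform(6.5, 8.5, n),    # pH
])
# Synthetic nitrate target (mg/L), loosely tied to conductivity and water level
y = 20 + 0.03 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(25,), solver="lbfgs",
                 max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("correlation R on held-out data:",
      np.corrcoef(model.predict(X_te), y_te)[0, 1])

In practice the correlation reported in the abstract (R = 0.93) would be evaluated against measured nitrate values rather than the synthetic target used here.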
This paper reviews the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructure and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, ultimately, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by a systematic use of expert elicitation has so far not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, the current method remains based on a probabilistic approach with its unsolved deficiencies.
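For context, the "traditional PSHA" criticized above refers to the classical Cornell-McGuire hazard integral, in which the annual rate of exceeding a ground-motion level \(x\) is assembled from source activity rates, magnitude and distance distributions, and a ground-motion model:
\[
\lambda(IM > x) \;=\; \sum_{i=1}^{N_{\mathrm{src}}} \nu_i
\int\!\!\int P(IM > x \mid m, r)\, f_{M_i}(m)\, f_{R_i}(r)\, dm\, dr ,
\]
where \(\nu_i\) is the rate of earthquakes on source \(i\) and \(f_{M_i}\), \(f_{R_i}\) are its magnitude and distance densities. The criticisms summarized in the abstract concern, among other things, the independence assumptions built into the factors of this integrand.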
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and, in general, robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.
Geospatial technology is increasing in demand for many applications in the geosciences. The spatial variability of bedrock/hard rock is vital for many geotechnical and earthquake engineering problems such as the design of deep foundations, site amplification, ground response studies, liquefaction and microzonation. In this paper, the reduced level of rock at Bangalore, India is derived from data of 652 boreholes in an area covering 220 km². To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth, a geostatistical model based on the Ordinary Kriging technique, an Artificial Neural Network (ANN) model and a Support Vector Machine (SVM) model have been developed. In Ordinary Kriging, knowledge of the semi-variogram of the reduced level of rock from the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A newly developed type of cross-validation analysis demonstrates the robustness of the Ordinary Kriging model. The ANN model is based on multilayer perceptrons (MLPs) trained with the Levenberg–Marquardt backpropagation algorithm using 90% of the available data. The SVM, a learning machine based on statistical learning theory that performs regression by introducing a loss function, has been used to predict the reduced level of rock from the large data set. A comparison of the three numerical models for predicting the reduced level of rock is presented and discussed.
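As an illustrative companion to the SVM approach described above, the following minimal Python sketch fits an epsilon-insensitive support-vector regression of reduced rock level on borehole coordinates. It is a sketch under stated assumptions, not the authors' implementation: the coordinates and rock levels are synthetic placeholders rather than the 652 Bangalore boreholes, and the RBF kernel and hyperparameters are illustrative choices.

# Hedged sketch: SVR of reduced rock level (m) from borehole easting/northing,
# trained on roughly 90% of the points as in the study. Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 652
xy = rng.uniform(0, 15_000, size=(n, 2))  # easting, northing (m), synthetic
# Synthetic reduced level of rock: smooth trend plus noise
z = 900 - 0.002 * xy[:, 0] + 5 * np.sin(xy[:, 1] / 2000) + rng.normal(0, 2, n)

X_tr, X_te, z_tr, z_te = train_test_split(xy, z, test_size=0.1, random_state=1)
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
svm.fit(X_tr, z_tr)
print("RMSE on held-out boreholes (m):",
      np.sqrt(np.mean((svm.predict(X_te) - z_te) ** 2)))

A comparable Ordinary Kriging prediction would instead fit a semi-variogram to the borehole values and solve the kriging system at each prediction point; the comparison in the paper rests on evaluating all three models against the same held-out boreholes.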